
    Pipelined genetic propagation

    © 2015 IEEE. Genetic Algorithms (GAs) are a class of numerical and combinatorial optimisers which are especially useful for solving complex non-linear and non-convex problems. However, the required execution time often limits their application to small-scale or latency-insensitive problems, so techniques to increase the computational efficiency of GAs are needed. FPGA-based acceleration has significant potential for speeding up genetic algorithms, but existing FPGA GAs are limited by the generational approaches inherited from software GAs. Many parts of the generational approach do not map well to hardware, such as the large shared population memory and the intrinsic loop-carried dependency. To address this problem, this paper proposes a new hardware-oriented approach to GAs, called Pipelined Genetic Propagation (PGP), which is intrinsically distributed and pipelined. PGP represents a GA solver as a graph of loosely coupled genetic operators, which allows the solution to be scaled to the available resources, and also to dynamically change topology at run-time to explore different solution strategies. Experiments show that pipelined genetic propagation is effective in solving seven different applications. Our PGP design is 5 times faster than a recent FPGA-based GA system, and 90 times faster than a CPU-based GA system.
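    The abstract describes a GA as a graph of loosely coupled operators passing individuals through a pipeline rather than iterating over a shared generational population. A minimal software sketch of that idea (a steady-state operator pipeline in plain Python, with hypothetical parameters; the paper's hardware design is not reproduced here):

```python
import random
from collections import deque

random.seed(0)
GENOME_LEN = 16

def fitness(g):
    # OneMax toy objective: count of 1-bits in the genome.
    return sum(g)

def select(snapshot):
    # Binary tournament over a local snapshot of the pool.
    a, b = random.sample(snapshot, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # One-point crossover.
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(g):
    # Single-bit flip.
    g = list(g)
    g[random.randrange(GENOME_LEN)] ^= 1
    return g

# Operators are chained select -> crossover -> mutate, and each new
# individual displaces the oldest one (steady-state, no generation
# barrier) -- a software stand-in for the propagation pipeline.
pool = deque([[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(20)], maxlen=20)
best0 = max(fitness(g) for g in pool)
best_ever = best0

for _ in range(2000):
    snapshot = list(pool)
    child = mutate(crossover(select(snapshot), select(snapshot)))
    pool.append(child)
    best_ever = max(best_ever, fitness(child))
```

    Because there is no generation-wide synchronisation point, each operator in this chain could in principle run as an independent pipeline stage, which is the property the paper exploits in hardware.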

    The LUT-SR Family of Uniform Random Number Generators for FPGA Architectures


    Multiplierless Algorithm for Multivariate Gaussian Random Number Generation in FPGAs


    A domain specific approach to high performance heterogeneous computing

    Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result; and second, exploiting knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain-specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics, or domain metrics, may be formulated in advance and populated at run-time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program. These claims are illustrated using the domain of derivatives pricing in computational finance, with the domain metrics of workload latency and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running on a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both makespan and accuracy are generally within 10 percent of the observed run-time performance. When these models are used as inputs to machine learning and MILP-based workload allocation approaches, latency improvements of up to 24 and 270 times, respectively, over a heuristic approach are seen.
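    The abstract's "constrained integer program" can be illustrated on a toy instance: choose a platform for each task to minimise makespan (the maximum per-platform load) subject to an accuracy budget. The latency/error tables and budget below are invented for illustration, and exhaustive search stands in for a MILP solver:

```python
import itertools

# Hypothetical metric-model outputs: predicted latency (s) and predicted
# pricing error for each task on each of two platforms.
latency = [[4, 1], [3, 1], [2, 5], [6, 2]]   # latency[task][platform]
error   = [[0.1, 0.4], [0.1, 0.5], [0.2, 0.1], [0.1, 0.3]]
ERROR_BUDGET = 1.0
N_PLATFORMS = 2

best_assign, best_makespan = None, float("inf")
# Brute force over all task->platform assignments (a real instance
# would hand the same objective and constraint to a MILP solver).
for assign in itertools.product(range(N_PLATFORMS), repeat=len(latency)):
    # Accuracy constraint: total predicted error within budget.
    if sum(error[t][p] for t, p in enumerate(assign)) > ERROR_BUDGET:
        continue
    # Objective: makespan = heaviest platform's total latency.
    loads = [0.0] * N_PLATFORMS
    for t, p in enumerate(assign):
        loads[p] += latency[t][p]
    makespan = max(loads)
    if makespan < best_makespan:
        best_assign, best_makespan = assign, makespan
```

    On this instance the fastest assignment (makespan 4) violates the error budget, so the optimiser settles for a feasible makespan of 5, which is exactly the latency/accuracy trade-off the abstract describes.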

    Consistent cosmological structure formation on all scales in relativistic extensions of MOND

    General relativity manifests very similar equations in different regimes, notably in large scale cosmological perturbation theory, non-linear cosmological structure formation, and in weak field galactic dynamics. The same is not necessarily true in alternative gravity theories, in particular those that possess MONDian behaviour ("relativistic extensions" of MOND). In these theories different regimes are typically studied quite separately, sometimes even with the freedom in the theories chosen differently in different regimes. If we wish to properly and fully test complete cosmologies containing MOND against the ΛCDM paradigm then we need to understand cosmological structure formation on all scales, and do so in a coherent and consistent manner. We propose a method for doing so and apply it to generalised Einstein-Aether theories as a case study. We derive the equations that govern cosmological structure formation on all scales in these theories and show that the same free function (which may contain both Newtonian and MONDian branches) appears in the cosmological background, linear perturbations, and non-linear cosmological structure formation. We show that MONDian behaviour on galactic scales does not necessarily result in MONDian behaviour on cosmological scales, and that when MONDian behaviour does arise cosmologically, there is no modification to the Friedmann equations governing the evolution of the homogeneous cosmological background. We comment on how existing N-body simulations relate to complete and consistent generalised Einstein-Aether cosmologies. The equations derived in this work allow consistent cosmological N-body simulations to be run in these theories whether or not MONDian behaviour manifests on cosmological scales.
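    The "free function with Newtonian and MONDian branches" plays the role of the MOND interpolating function. As a worked illustration of its two branches (standard MOND phenomenology in the weak-field quasi-static limit, not an equation taken from this abstract), the Poisson equation is typically modified to

```latex
\nabla \cdot \left[ \mu\!\left(\frac{|\nabla\Phi|}{a_0}\right) \nabla\Phi \right] = 4\pi G \rho,
\qquad
\mu(x) \simeq
\begin{cases}
1, & x \gg 1 \quad \text{(Newtonian branch)}\\
x, & x \ll 1 \quad \text{(MONDian branch)}
\end{cases}
```

    where $a_0 \sim 10^{-10}\,\mathrm{m\,s^{-2}}$ is the MOND acceleration scale. The abstract's point is that the same free function must then be used consistently in the background, linear, and non-linear regimes.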

    Spin-based removal of instrumental systematics in 21cm intensity mapping surveys

    Upcoming cosmological intensity mapping surveys will open new windows on the Universe, but they must first overcome a number of significant systematic effects, including polarization leakage. We present a formalism that uses scan strategy information to model the effect of different instrumental systematics on the recovered cosmological intensity signal for `single-dish' (autocorrelation) surveys. This modelling classifies different systematics according to their spin symmetry, making it particularly relevant for dealing with polarization leakage. We show how to use this formalism to calculate the expected contamination from different systematics as a function of the scanning strategy. Most importantly, we show how systematics can be disentangled from the intensity signal based on their spin properties via map-making. We illustrate this, using a set of toy models, for some simple instrumental systematics, demonstrating the ability to significantly reduce the contamination to the observed intensity signal. Crucially, unlike existing foreground removal techniques, this approach works for signals that are non-smooth in frequency, e.g. polarized foregrounds. These map-making approaches are simple to apply and represent an orthogonal and complementary approach to existing techniques for removing systematics from upcoming 21cm intensity mapping surveys. Comment: 19 pages, 14 figures, 2 tables; published in MNRAS.
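    A toy numerical illustration of the spin idea (not the paper's pipeline; all numbers are invented): a spin-1 systematic couples to the scan crossing angle as cos(ψ + φ), so simple binning map-making over well-distributed crossing angles averages it away while the spin-0 intensity survives.

```python
import math

I_true = 2.5            # spin-0 sky intensity in one pixel
amp, phase = 0.8, 0.3   # hypothetical spin-1 contaminant amplitude/phase

# Idealised scan strategy: the pixel is crossed at N uniformly
# distributed angles psi.
N = 360
angles = [2 * math.pi * j / N for j in range(N)]

# Time-ordered samples: spin-0 signal plus a spin-1 systematic that
# rotates with the crossing angle.
samples = [I_true + amp * math.cos(psi + phase) for psi in angles]

# Binning map-making: averaging over uniformly covered angles nulls
# the spin-1 term exactly, leaving an unbiased intensity estimate.
I_hat = sum(samples) / N
```

    With less uniform angle coverage the cancellation is only partial, which is why the paper quantifies contamination as a function of the scan strategy.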

    Spin characterization of systematics in CMB surveys – a comprehensive formalism

    The CMB B-mode polarization signal – both the primordial gravitational wave signature and the signal sourced by lensing – is subject to many contaminants from systematic effects. Of particular concern are systematics that result in mixing of signals of different ‘spin’, particularly leakage from the much larger spin-0 intensity signal to the spin-2 polarization signal. We present a general formalism, which can be applied to arbitrary focal plane setups, that characterizes signals in terms of their spin. We provide general expressions to describe how spin-coupled signals observed by the detectors manifest at map-level, in the harmonic domain, and in the power spectra, focusing on the polarization spectra – the signals of interest for upcoming CMB surveys. We demonstrate the presence of a previously unidentified cross-term between the systematic and the intrinsic sky signal in the power spectrum, which in some cases can be the dominant source of contamination. The formalism is not restricted to intensity to polarization leakage but provides a complete elucidation of all leakage including polarization mixing, and applies to both full and partial (masked) sky surveys, thus covering space-based, balloon-borne, and ground-based experiments. Using a pair-differenced setup, we demonstrate the formalism by using it to completely characterize the effects of differential gain and pointing systematics, incorporating both intensity leakage and polarization mixing. We validate our results with full time ordered data simulations. Finally, we show in an Appendix that an extension of simple binning map-making to include additional spin information is capable of removing spin-coupled systematics during the map-making process.
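    A concrete instance of the spin-0 to spin-2 leakage discussed here is the textbook differential-gain model (consistent with, but not copied from, the abstract): for a pair of orthogonal detectors with gains $1 \pm \delta g/2$ observing at crossing angle $\psi$,

```latex
d_A = \left(1 + \tfrac{\delta g}{2}\right)\left[I + Q\cos 2\psi + U\sin 2\psi\right],
\qquad
d_B = \left(1 - \tfrac{\delta g}{2}\right)\left[I - Q\cos 2\psi - U\sin 2\psi\right],
\\[4pt]
\frac{d_A - d_B}{2}
= \underbrace{\tfrac{\delta g}{2}\, I}_{\text{spin-0 leakage}}
+ Q\cos 2\psi + U\sin 2\psi .
```

    The pair difference is meant to isolate the spin-2 polarization, but the gain mismatch injects a spin-0 intensity term, which is exactly the kind of spin-coupled contaminant the formalism classifies and removes.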

    System-level linking of synthesised hardware and compiled software using a higher-order type system

    Devices with tightly coupled CPUs and FPGA logic allow for the implementation of heterogeneous applications which combine multiple components written in hardware and software languages, including first-party source code and third-party IP. Flexibility in component relationships is important, so that the system designer can move components between software and hardware as the application design evolves. This paper presents a system-level type system and linker, which allows functions in software and hardware components to be directly linked at link time, without requiring any modification or recompilation of the components. The type system is designed to be language agnostic, and exhibits higher-order features, to enable design patterns such as notifications and callbacks to software from within hardware functions. We demonstrate the system through a number of case studies which link compiled software against synthesised hardware on the Xilinx Zynq platform.
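    The paper's type system itself is not shown in this listing; the sketch below only illustrates the general idea of signature-checked, link-time binding with a higher-order callback type. Every name and the signature notation are hypothetical, and a Python function stands in for synthesised hardware.

```python
registry = {}   # exported symbols: name -> (declared signature, callable)

def export(name, sig):
    # Register a component's function together with its declared type.
    def wrap(fn):
        registry[name] = (sig, fn)
        return fn
    return wrap

def link(name, sig):
    # Resolve a symbol at "link time", rejecting signature mismatches
    # instead of silently miswiring components.
    found_sig, fn = registry[name]
    if found_sig != sig:
        raise TypeError(f"signature mismatch for {name}")
    return fn

# "Hardware" component exports a higher-order function: it takes a
# notification callback (int -> unit) and a value.
@export("hw.process", "((int -> unit), int) -> int")
def hw_process(notify, x):
    result = x * 2          # stand-in for the synthesised datapath
    notify(result)          # callback into software from "hardware"
    return result

# "Software" side links against the hardware export by name and type,
# passing a software callback -- the notification pattern the paper
# says higher-order types enable.
events = []
process = link("hw.process", "((int -> unit), int) -> int")
out = process(events.append, 21)
```

    The point of the higher-order signature is that the callback's own type travels with the function type, so the linker can check the whole wiring without recompiling either side.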

    Programming Model to Develop Supercomputer Combinatorial Solvers

    © 2017 IEEE. Novel architectures for massively parallel machines offer better scalability and the prospect of achieving linear speedup for sizable problems in many domains. The development of suitable programming models and accompanying software tools for these architectures remains one of the biggest challenges towards exploiting their full potential. We present a multi-layer software abstraction model to develop combinatorial solvers on massively-parallel machines with regular topologies. The model enables different challenges in the design and optimization of combinatorial solvers to be tackled independently (separation of concerns) while permitting problem-specific tuning and cross-layer optimization. Specifically, the model decouples the issues of inter-node communication, node-level scheduling, problem mapping, mesh-level load balancing, and expressing problem logic. We present an implementation of the model and use it to profile a Boolean satisfiability solver on simulated massively-parallel machines with different scales and topologies.
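    A minimal sketch of the layering the abstract describes, with the topology, the node-level problem logic, and the scheduling loop kept as independent pieces. The "problem" is a stand-in (propagating a global maximum across a torus by neighbour exchange), not a SAT solver, and all structure here is hypothetical:

```python
W = H = 4   # 4x4 torus of nodes

def neighbours(n):
    # Topology layer: adjacency on a 2D torus. Swapping this function
    # changes the machine topology without touching the other layers.
    x, y = n % W, n // W
    return [((x + 1) % W) + y * W, ((x - 1) % W) + y * W,
            x + ((y + 1) % H) * W, x + ((y - 1) % H) * W]

def node_step(state, inbox):
    # Problem-logic layer: combine local state with neighbour messages.
    return max([state] + inbox)

def run(states, steps):
    # Scheduling layer: synchronous rounds over all nodes; an
    # asynchronous or load-balanced scheduler could replace this
    # without changing the topology or problem logic.
    for _ in range(steps):
        states = [node_step(states[n], [states[m] for m in neighbours(n)])
                  for n in range(W * H)]
    return states

init = [(7 * n + 3) % 13 for n in range(W * H)]   # arbitrary node values
final = run(init, steps=4)   # 4 rounds cover the 4x4 torus diameter
```

    Because information travels one hop per round, `steps` equal to the torus diameter guarantees every node has seen the global maximum, mirroring how mesh scale and topology bound solver communication.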